Recycled Error Bits: Energy-Efficient Architectural Support for Higher Precision Floating Point

Authors

  • Ralph Nathan
  • Bryan Anthonio
  • Shih-Lien Lu
  • Helia Naeimi
  • Daniel J. Sorin
  • Xiaobai Sun
Abstract

In this work, we provide energy-efficient architectural support for floating point accuracy. Our goal is to provide accuracy that is far greater than that provided by the processor’s hardware floating point unit (FPU). Specifically, for each floating point addition performed, we “recycle” that operation’s error: the difference between the finite-precision result produced by the hardware and the result that would have been produced by an infinite-precision FPU. We make this error architecturally visible such that it can be used, if desired, by software. Experimental results on physical hardware show that software that exploits architecturally recycled error bits can achieve accuracy comparable to a 2B-bit FPU with performance and energy that are comparable to a B-bit FPU.
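The “recycled error” of an addition is what error-free transformations compute in software. As an illustrative sketch (not the paper’s hardware mechanism), Knuth’s TwoSum recovers the exact rounding error of a floating point add in a few ordinary operations, and accumulating those errors alongside the running sum yields accuracy close to twice the working precision:

```python
def two_sum(a: float, b: float):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    s + e == a + b exactly. e is the rounding error that the paper's
    hardware would expose architecturally."""
    s = a + b
    bb = s - a                      # the part of b that survived in s
    e = (a - (s - bb)) + (b - bb)   # what was lost from a and from b
    return s, e


def compensated_sum(xs):
    """Accumulate the recycled errors alongside the running sum,
    giving accuracy close to twice the working precision."""
    s = 0.0
    err = 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e
    return s + err
```

For example, naively summing `[1.0, 1e-17, -1.0]` in double precision loses the small term entirely, while the compensated version recovers it from the recycled error.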


Similar Articles

Half-precision Floating-point Ray Traversal

Sign (1 bit) | Exponent (5 bits) | Fraction (10 bits): the 16-bit floating-point format defined in the IEEE 754-2008 standard. Storage support on most modern CPUs and GPUs; native computation support especially on mobile platforms (upcoming NVIDIA Pascal desktop GPUs are announced to have native computation support). Pros: smaller cache footprint (compared to "regular" 32-bit floats), more energy effici...
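As a sketch of the binary16 layout described in that snippet (the function name is illustrative, not from the paper), a 16-bit pattern can be split into its fields and decoded in plain Python:

```python
def decode_half(bits: int):
    """Split a 16-bit pattern into the IEEE 754-2008 binary16 fields
    (1 sign bit, 5 exponent bits, 10 fraction bits) and its value."""
    sign = bits >> 15
    exp = (bits >> 10) & 0x1F
    frac = bits & 0x3FF
    if exp == 0:                        # zero or subnormal: no hidden 1
        value = (-1.0) ** sign * frac * 2.0 ** -24
    elif exp == 0x1F:                   # all-ones exponent: inf or NaN
        value = float("nan") if frac else (-1.0) ** sign * float("inf")
    else:                               # normal: hidden leading 1, bias 15
        value = (-1.0) ** sign * (1 + frac / 1024.0) * 2.0 ** (exp - 15)
    return sign, exp, frac, value
```

For instance, the pattern `0x3C00` (sign 0, exponent 15, fraction 0) decodes to 1.0.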


An FPGA-Based Face Detector Using Neural Network and a Scalable Floating Point Unit

This study implemented an FPGA-based face detector using neural networks and a scalable floating point arithmetic unit (FPU). The FPU provides dynamic range and reduces the bit-width of the arithmetic unit more than a fixed-point method does. These features reduce memory requirements, making the design efficient for neural network systems with large data widths. The arithmetic unit occupies 39~45% of ...


Comparison of Adders for optimized Exponent Addition circuit in IEEE754 Floating point multiplier using VHDL

Floating point arithmetic has vast applications in DSP, digital computers, and robotics due to its ability to represent very small and very large numbers, as well as signed and unsigned numbers. In spite of the complexity involved in floating point arithmetic, its implementation is increasing day by day. Here we compare three different types of adders while calculating the addition of exponent ...


On Floating-Point Normal Vectors

In this paper we analyze normal vector representations. We derive the error of the most widely used representation, namely 3D floating-point normal vectors. Based on this analysis, we show that, in theory, the discretization error inherent to single precision floating-point normals can be achieved by 2^50.2 uniformly distributed normals, addressable by 51 bits. We review common sphere parameteri...


Optimizing the representation of intervals

A representation of intervals is proposed that, instead of both end points, uses the low point and the width of the interval. This proposed representation is more efficient since both end points have many corresponding bits that are equal. Consequently, the width of the interval can be represented with a smaller number of bits than an end point, resulting in a better utilization of the number o...
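A minimal sketch of the low-point-plus-width idea in that snippet, assuming the width is rounded outward by one ulp so the decoded interval still encloses the original (function names are illustrative, not from the paper):

```python
import math


def to_low_width(lo: float, hi: float):
    """Encode [lo, hi] as (low point, width). Because lo and hi share
    their leading bits, the width is a small number that would need
    fewer bits to store than a second full endpoint. The width is
    rounded up one ulp so the decoded interval encloses the original."""
    return lo, math.nextafter(hi - lo, math.inf)


def from_low_width(lo: float, w: float):
    """Recover an enclosing interval [lo, lo + w]."""
    return lo, lo + w
```

The outward rounding trades a one-ulp overestimate of the width for the guarantee that no part of the original interval is lost in the round trip.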



Journal:
  • CoRR

Volume abs/1309.7321  Issue 

Pages  -

Publication date 2013